Apache Kafka



Product Details

What is Apache Kafka?

Apache Kafka is an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. The Kafka event streaming platform is used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Apache Kafka Technical Details

Operating Systems: Unspecified
Mobile Application: No


Reviews and Ratings (127)

Community Insights

TrustRadius Insights are summaries of user sentiment data from TrustRadius reviews and, when necessary, 3rd-party data sources.

Apache Kafka is a widely used platform that has proven invaluable across various industries and applications. Organizations rely on it for real-time communication and for keeping order information up to date. This is particularly useful for organizations that need to process large volumes of data, such as those in the cybersecurity industry. Apache Kafka is also considered the go-to tool for event streaming, generating events and notifying relevant applications for consumption. Additionally, it is used in both first-party and third-party components of applications to address data proliferation and enable efficient notifications.

Another key use case for Apache Kafka is replacing classical messaging software within organizations, becoming the new standard for messaging. This powerful streaming framework plays a crucial role as a queuing mechanism for records in various pipelines, providing a simple yet efficient system for queuing and maintaining records. Moreover, Apache Kafka excels at storing and processing records on dedicated servers, supporting high data loads and offering the ability to replay consumed data. This makes it ideal for buffering incoming records during traffic spikes or in case of data infrastructure failures.
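
A minimal sketch of that buffering pattern, assuming a local broker at localhost:9092 and a hypothetical "orders" topic (neither comes from the reviews): a plain Java consumer drains the topic at whatever pace the downstream system can sustain, while records that arrive during a spike simply wait in Kafka.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "order-processors");        // consumers in this group share the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // hypothetical topic
            while (true) {
                // poll() returns whatever has accumulated; a slow consumer simply falls behind
                // on the topic instead of overloading the downstream service.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```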

Furthermore, Apache Kafka finds its purpose in driving real-time monitoring by sending log information to feed other applications. Its ability to scale and manage common errors in messaging allows organizations to handle large quantities of messages per second without compromising performance. Another notable use case involves Apache Kafka acting as an efficient stream/message ingestion engine for customer-facing applications, enabling internal analytics and real-time decision-making.
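
For the log-ingestion side, a producer along these lines is usually all that is required; the broker address and the "app-logs" topic below are illustrative assumptions rather than details taken from the reviews. Monitoring consumers then read the topic like any other, which is what lets one stream feed several downstream applications at once.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("acks", "all");                          // wait for in-sync replicas before reporting success
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical log event, keyed by the service that emitted it.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("app-logs", "checkout-service", "payment accepted for order 42");
            // send() is asynchronous; the callback reports the partition/offset or the failure.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("wrote to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```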

Additionally, Apache Kafka integrates seamlessly with big data technologies like Spark, making it a valuable addition to big data ecosystems. Organizations have successfully replaced legacy messaging solutions with Apache Kafka, thanks to its ability to serve as a messaging and data-streaming pipeline solution. It enables modern streaming API-based applications while ensuring high availability and clustering as a message broker between client-facing applications.
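
The Spark integration mentioned above typically goes through the Kafka source in Spark Structured Streaming. A minimal sketch, assuming a local broker, a hypothetical "events" topic, and the spark-sql-kafka connector on the classpath:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KafkaSparkSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingest")
                .getOrCreate();

        // Each Kafka record arrives as a row with key, value, topic, partition, offset and timestamp columns.
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker address
                .option("subscribe", "events")                       // hypothetical topic
                .load();

        // Echo the decoded records to the console; a real job would aggregate or join here.
        stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
              .writeStream()
              .format("console")
              .start()
              .awaitTermination();
    }
}
```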

Moreover, Apache Kafka serves as an ingress and egress queue for big data systems, facilitating data storage and retrieval processes. It also acts as a reliable queue for frontend applications to retrieve data and analytics from MapR and HortonWorks. With over five years of being utilized in data pipelines, Apache Kafka has consistently demonstrated excellent performance and reliability.

In summary, Apache Kafka proves to be versatile and essential across various industries and use cases. It facilitates real-time communication, ensures data integrity, enables efficient event streaming, replaces classical messaging software, and supports high scalability and fault tolerance. With its robust capabilities, Apache Kafka continues to be the go-to solution for organizations seeking to streamline their data processing and communication systems.

Fault tolerance and high scalability: Users have consistently praised Apache Kafka for its fault tolerance and high scalability. Many reviewers have stated that Kafka excels in handling large volumes of data and is considered a workhorse in data streaming.

Ease of administration: Reviewers appreciate Kafka's ease of administration, noting that it offers an abundance of options for managing and maintaining queues. Multiple users have mentioned that the platform allows for easy expansion and configuration of cluster growth, making it straightforward to administer.

Real-time streaming capabilities: Kafka's real-time streaming capabilities are seen as a significant advantage by users. Several reviewers have highlighted the platform's ability to handle real-time data pipelines and its resistance to node failure within the cluster. This feature enables users to process asynchronous data efficiently and ensures continuous availability of the system.
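
The resistance to node failure that reviewers describe comes from partition replication. As a sketch, assuming a local broker and a hypothetical "events" topic, a replicated topic can be created with the Java AdminClient like this:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions spread load across brokers; replication factor 3 keeps three copies of
            // each partition, so losing a single broker leaves the topic readable and writable.
            NewTopic topic = new NewTopic("events", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```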

Difficulty Monitoring Kafka Deployments: Some users have found it difficult to monitor their Kafka deployments and have expressed a desire for a separate monitoring dashboard that would provide them with better visibility into their topics and messages.

Steep Learning Curve for Creating Brokers and Topics: The process of creating brokers and topics in Kafka has been described as having a steep learning curve by some users, who believe that it could be simplified to make it more accessible.

Outdated Web User Interface: The web user interface of Kafka has not been updated in years, leading some users to feel that it lacks a streamlined user experience. They express the need for a more modern interface instead of relying on third-party tools.

Users have recommended using Apache Kafka for various messaging platform requirements. It integrates easily with multiple programming languages, offers stream processing capabilities, distributed data storage, and the ability to handle multiple requests simultaneously.

Another common recommendation is to consider Apache Kafka as a messaging broker due to its extensive feature set and guaranteed delivery of data to consumers. Users find it highly supported and widely used within the community.

Users also recommend Apache Kafka for streaming large amounts of data. They praise its scalability and ease of use, although they mention that manual rebalancing of partitions may be required when adding or deleting nodes. Additionally, users appreciate that Kafka allows connections between multiple producers and consumers with low resource consumption.

Overall, Apache Kafka is regarded as a practical choice for message processing systems, data streaming, and handling large volumes of data due to its stability, scalability, and diverse features.


Reviews (1-2 of 2)
Score 7 out of 10
Vetted Review
Verified User
Incentivized
Apache Kafka is used by our company as the "next generation" of messaging/data-streaming pipeline solutions, replacing our old legacy JMS-based messaging solution and enabling modern streaming API-based applications. When it is used for messaging, we shift the responsibility of data replay from the message source (publisher application) to the message destination (consumer application). This flexibility resolved the legacy issue where a source replaying messages impacted all subscribers to the same topic. When Kafka is used as the streaming pipeline, it integrates seamlessly with Spark/Spring Stream-based analytic solutions, since it is also a kind of distributed storage.
  • Undoubtedly, Kafka's high throughput and low latency are the highlights.
  • Kafka can scale horizontally very well.
  • The CLI and configuration details need to be worked out in more depth. The configuration naming convention is not good and causes a lot of confusion. Sometimes there are too many configuration parameters to tune, which requires the adopter to understand a lot of tricks (NFS entrapment, for example).
  • Lack of a good monitoring solution so far
When used for messaging, Apache Kafka is mostly preferred when the use case is pub/sub. It is not well suited to the end-to-end queue use case or the request/response paradigm. When Apache Kafka is used for streaming purposes, it has no native query language implementation; it is just a pipeline. You still need to put a lot of programming effort into your streaming client side to take care of those analytic requirements.
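
One common way to express that client-side processing in code is the Kafka Streams library; the sketch below is purely illustrative (the reviewer's own stack uses Spark/Spring Stream), and the topic names and broker address are assumptions. The point is that the "query" is ordinary Java code rather than SQL.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ViewCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "view-counter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> views = builder.stream("page-views"); // hypothetical input topic

        // Group by key (e.g. a page id) and count occurrences; the analytics live in client code.
        KTable<String, Long> counts = views.groupByKey().count();
        counts.toStream().to("view-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```
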
  • Kafka makes the messaging itself more reliable (as it has the distributed storage by itself and the message doesn't disappear even after it has been consumed).
  • Kafka can support a much higher-volume use case without too much extra pressure on the existing hardware.
Kafka is not a real messaging broker implementation in the way RabbitMQ or TIBCO EMS/JMS are. Although it can be used for messaging, we like the idea behind Kafka (data isn't "passing by"; instead it remains central, so the client can revisit the data if necessary). This also relieves the pressure of keeping old duplicated copies of the data on both the publisher and the consumer sides.
We are using the open-source Apache version of Kafka. The community is a good place to ask questions, and we can get most of our problems resolved there.
Score 9 out of 10
Vetted Review
Verified User
Incentivized
We use Kafka for two key features: (1) keeping a buffer of all the incoming records that need to be stored in our data infrastructure, and (2) having a way to replay messages in case our data infrastructure loses some data.
The reason we need to buffer is that when our traffic spikes, we can have up to 1 million messages coming in that need to be processed in some form or fashion. To expect the back-end service to support that is crazy. Instead, we dump them into Kafka to give our data infrastructure time to ingest them. As for replaying events, sometimes the ingestion pipeline fails and drops some messages. I know - that's a huge mistake on our engineering team's part - but when it does happen Kafka has the ability to rewind and replay messages, resulting in delayed processing but no data loss.
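
The rewind-and-replay described here boils down to seeking a consumer back to an earlier offset. A minimal sketch, with the topic name, partition, and starting offset as assumptions:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "replay-tool");
        props.put("enable.auto.commit", "false");          // offsets are managed by hand during a replay
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition partition = new TopicPartition("ingest", 0); // hypothetical topic/partition
        long replayFrom = 1_000_000L; // assumed offset of the first record known to have been dropped

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(partition));
            consumer.seek(partition, replayFrom); // rewind; Kafka still has the records on disk
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.println("re-ingesting: " + record.value()); // delayed, but nothing is lost
                }
            }
        }
    }
}
```
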
  • Really easy to configure. I've used other message brokers such as RabbitMQ and compared to them, Kafka's configurations are very easy to understand and tweak.
  • Very scalable: easily configured to run on multiple nodes allowing for ease of parallelism (assuming your queues/topics don't have to be consumed in the exact same order the messages were delivered)
  • Not exactly a feature, but I trust Kafka will be around for at least another decade because active development has continued to be strong and there's a lot of financial backing from Confluent and LinkedIn, and probably many other companies who are using it (which, anecdotally, is many).
  • Doesn't work well with many small topics (on the order of thousands). There is a practical limit, due to file handle usage, on the number of topics Kafka can have before it grinds to a halt. This is not an issue for most people, but it became an issue for us, as we need to have many, many topics, and so we weren't able to fully migrate to Kafka except for a few of our big queues.
  • Lack of tenant isolation: if a partition on one node starts to lag on consume or publish, then all the partitions on that node will start to lag. That's what we've noticed and it's really frustrating to our customers that another customer's bad data affects them as well.
  • I don't have too much experience here, but I hear from other engineers on my team that the CLI admin tool is a real pain to use. For example, they say the arguments have no clear naming convention, so they are hard to memorize, and sometimes you have to pass in undocumented properties.
Despite the disadvantages I list, I really believe that Kafka is the right choice whenever you need a queueing or message broker system. Kafka is way too battle-tested and scales too well to ever not consider it. The only exception is if your use case requires many, many small topics. Also, Kafka doesn't support delay queues out of the box and so you will need to "hack" it through special code on the consumer side.
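
One possible shape of that consumer-side "hack" for delay queues (purely illustrative; the topic name and the convention of carrying the due time in the record key are assumptions) is a consumer that waits until each record's due time before processing it:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DelayedJobConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "delayed-jobs-worker");
        props.put("max.poll.interval.ms", "600000");       // tolerate long waits between polls
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("delayed-jobs")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Assumed convention: the key holds the epoch-millis time the job becomes due.
                    long dueAt = Long.parseLong(record.key());
                    long wait = dueAt - System.currentTimeMillis();
                    if (wait > 0) {
                        Thread.sleep(wait); // crude; a production worker would pause() the partition instead
                    }
                    System.out.println("running job: " + record.value());
                }
            }
        }
    }
}
```
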
  • Positive: bursts of traffic on special holidays are easy to handle because Kafka can absorb and buffer all the messages we need to process long enough to let an understaffed set of back-end services catch up on processing. Hard to put a number to it but we probably save $5k a month having fewer machines running.
  • Positive: makes decoupling the web and API services from the deeper back-end services easier by providing topics as an interface. This allowed us to split up our teams and have them develop independently of each other, speeding up software development.
  • Negative: our engineers have made mistakes such as accidentally dropping a few thousand messages due to the CLI being confusing to use, and as a result a customer lost some of their precious data. I'd say that was more our fault than Kafka's though.
I would only use RabbitMQ over Kafka when you need to have delay queues or tons of small topics/queues around.
I don't know too much about Pulsar - currently evaluating it - but it's supposed to have the same or better throughput while allowing for tons of queues. Stay tuned - I might update this review after we finish evaluating Pulsar. It's much less battle-tested though.
We use Heroku to host Kafka, and they have tons of Kafka experts that have helped us tune every little setting and given us advice via email or live chat (if you pay for premium support).